19 research outputs found

    Learning to Extract a Video Sequence from a Single Motion-Blurred Image

    Full text link
    We present a method to extract a video sequence from a single motion-blurred image. Motion-blurred images are the result of an averaging process, where instant frames are accumulated over time during the exposure of the sensor. Unfortunately, reversing this process is nontrivial. Firstly, averaging destroys the temporal ordering of the frames. Secondly, the recovery of a single frame is a blind deconvolution task, which is highly ill-posed. We present a deep learning scheme that gradually reconstructs a temporal ordering by sequentially extracting pairs of frames. Our main contribution is to introduce loss functions invariant to the temporal order. This lets a neural network choose during training which frame to output among the possible combinations. We also address the ill-posedness of deblurring by designing a network with a large receptive field, implemented via resampling to achieve higher computational efficiency. Our proposed method successfully retrieves sharp image sequences from a single motion-blurred image and generalizes well to synthetic and real datasets captured with different cameras.
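    As a minimal sketch of how such an order-invariant loss could look (the function name, the plain L2 distance, and the two-frame setup are illustrative assumptions, not the paper's exact formulation):

    ```python
    import numpy as np

    def order_invariant_pair_loss(pred_a, pred_b, gt_first, gt_last):
        """Temporal-order-invariant loss for a pair of predicted frames.

        The network outputs two frames without knowing which is temporally
        first; charging it the cheaper of the two possible assignments makes
        either ordering an equally valid answer during training.
        """
        def l2(x, y):
            return np.mean((x - y) ** 2)

        cost_keep = l2(pred_a, gt_first) + l2(pred_b, gt_last)  # (a, b) = (first, last)
        cost_swap = l2(pred_a, gt_last) + l2(pred_b, gt_first)  # (a, b) = (last, first)
        return min(cost_keep, cost_swap)
    ```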

    Deep Mean-Shift Priors for Image Restoration

    Full text link
    In this paper we introduce a natural image prior that directly represents a Gaussian-smoothed version of the natural image distribution. We include our prior in a formulation of image restoration as a Bayes estimator that also allows us to solve noise-blind image restoration problems. We show that the gradient of our prior corresponds to the mean-shift vector on the natural image distribution. In addition, we learn the mean-shift vector field using denoising autoencoders, and use it in a gradient descent approach to perform Bayes risk minimization. We demonstrate competitive results for noise-blind deblurring, super-resolution, and demosaicing. Comment: NIPS 2017.
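    A hedged sketch of how such a prior might enter a gradient-descent restoration loop, assuming a pre-trained denoising autoencoder `dae` and linear degradation operators `forward_op`/`adjoint_op` (all placeholder names); the residual dae(x) - x approximates the mean-shift vector on the smoothed image distribution:

    ```python
    import numpy as np

    def restore(y, forward_op, adjoint_op, dae, sigma=25.0,
                lam=0.1, step=1e-3, iters=300):
        """Gradient-descent restoration with a denoising-autoencoder prior.

        The residual dae(x) - x of an autoencoder trained to remove Gaussian
        noise of std `sigma` approximates the mean-shift vector, i.e. the
        gradient of the Gaussian-smoothed log image density.
        """
        x = adjoint_op(y)  # simple initialization from the degraded image
        for _ in range(iters):
            data_grad = adjoint_op(forward_op(x) - y)    # d/dx 0.5 * ||A x - y||^2
            prior_grad = -(dae(x) - x) / sigma**2        # negative mean-shift direction
            x = x - step * (data_grad + lam * prior_grad)
        return x
    ```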

    Learning to See through Reflections

    Get PDF
    "Pictures of objects behind a glass are difficult to interpret" "and understand due to the superposition of two real images: a reflection layer and a background layer. Separation of these two layers is challenging due to the ambiguities in as- signing texture patterns and the average color in the input image to one of the two layers. In this paper, we propose a novel method to reconstruct these layers given a single input image by explicitly handling the ambiguities of the re- construction. Our approach combines the ability of neural networks to build image priors on large image regions with an image model that accounts for the brightness ambiguity and saturation. We find that our solution generalizes to real images even in the presence of strong reflections. Extensive quantitative and qualitative experimental evaluations on both real and synthetic data show the benefits of our approach over prior work. Moreover, our proposed neural network is computationally and memory efficient.

    MFDNet: Towards Real-time Image Denoising On Mobile Devices

    Full text link
    Deep convolutional neural networks have achieved great progress in image denoising tasks. However, their complicated architectures and heavy computational cost hinder their deployment on mobile devices. Some recent efforts in designing lightweight denoising networks focus on reducing either FLOPs (floating-point operations) or the number of parameters. However, these metrics are not directly correlated with on-device latency. By performing extensive analysis and experiments, we identify network architectures that can fully utilize powerful neural processing units (NPUs) and thus enjoy both low latency and excellent denoising performance. Based on these findings, we propose a mobile-friendly denoising network, namely MFDNet. The experiments show that MFDNet achieves state-of-the-art performance on the real-world denoising benchmarks SIDD and DND under real-time latency on mobile devices. The code and pre-trained models will be released. Comment: Under review at the 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2023).
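    The FLOPs-versus-latency gap is easy to demonstrate; the sketch below (an illustrative CPU timing in PyTorch, not MFDNet itself or an NPU measurement) compares a plain convolution with a depthwise-separable one that has far fewer FLOPs yet need not run faster:

    ```python
    import time
    import torch
    import torch.nn as nn

    # Two blocks with comparable roles: the depthwise-separable one has far
    # fewer FLOPs, yet on many processors the plain convolution runs faster
    # because it maps better onto the hardware.
    plain = nn.Conv2d(64, 64, 3, padding=1)
    separable = nn.Sequential(
        nn.Conv2d(64, 64, 3, padding=1, groups=64),  # depthwise
        nn.Conv2d(64, 64, 1),                        # pointwise
    )

    x = torch.randn(1, 64, 256, 256)

    def bench(m, n=50):
        with torch.no_grad():
            m(x)  # warm-up
            t0 = time.perf_counter()
            for _ in range(n):
                m(x)
            return (time.perf_counter() - t0) / n

    print(f"plain conv:     {bench(plain) * 1e3:.2f} ms")
    print(f"separable conv: {bench(separable) * 1e3:.2f} ms")
    ```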

    Motion Deblurring from a Single Image

    No full text
    With the information explosion, a tremendous number of photos are captured and shared via social media every day. Technically, a photo requires a finite exposure to accumulate light from the scene. Thus, objects moving during the exposure generate motion blur in a photo. Motion blur is an image degradation that makes visual content less interpretable and is therefore often seen as a nuisance. Although motion blur can be reduced by setting a short exposure time, the insufficient amount of light then has to be compensated for by increasing the sensor's sensitivity, which inevitably introduces a large amount of sensor noise. This motivates the need to remove motion blur computationally. Motion deblurring is an important problem in computer vision, and it is challenging due to its ill-posed nature, meaning the solution is not well defined. Mathematically, a blurry image caused by uniform motion is formed by the convolution between a blur kernel and a latent sharp image. Potentially, infinitely many pairs of blur kernel and latent sharp image can result in the same blurry image; hence, some prior knowledge or regularization is required to address this problem. Even if the blur kernel is known, restoring the latent sharp image is still difficult because the high-frequency information has been removed. Although the uniform motion deblurring problem can be modeled mathematically, it only covers camera in-plane translational motion. In practice, motion is more complicated and can be non-uniform. Non-uniform motion blur can come from many sources: camera out-of-plane rotation, scene depth changes, object motion, and so on. It is therefore more challenging to remove. In this thesis, our focus is motion blur removal, and we aim to address four challenging motion deblurring problems. We start from the noise-blind image deblurring scenario, where the blur kernel is known but the noise level is unknown, and introduce an efficient and robust solution based on a Bayesian framework using a smooth generalization of the 0-1 loss. We then study the blind uniform motion deblurring scenario, where both the blur kernel and the latent sharp image are unknown, and exploit the relative scale ambiguity between the latent sharp image and the blur kernel. Moreover, we study the face deblurring problem and introduce a novel deep learning network architecture to solve it. Finally, we address the general motion deblurring problem, where we aim to recover a sequence of 7 frames, each depicting some instantaneous motion of the objects in the scene.
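    The uniform blur formation model described above, as a short numpy sketch (the kernel shape and noise level are illustrative):

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def blur(image, kernel, noise_std=0.01, rng=None):
        """Uniform motion-blur formation: B = K * I + n.

        A single blur kernel is convolved with the latent sharp image and
        sensor noise is added; inverting this (deblurring) is ill-posed
        because many (kernel, image) pairs explain the same observation.
        """
        rng = np.random.default_rng() if rng is None else rng
        blurry = convolve2d(image, kernel, mode="same", boundary="symm")
        return blurry + rng.normal(0.0, noise_std, size=blurry.shape)

    # Example: a horizontal 9-pixel motion kernel (normalized to sum to 1).
    kernel = np.zeros((9, 9))
    kernel[4, :] = 1.0 / 9.0
    ```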

    Learning to Extract Flawless Slow Motion from Blurry Videos

    Get PDF
    In this paper, we introduce the task of generating a sharp slow-motion video given a low frame rate blurry video. We propose a data-driven approach, where the training data is captured with a high frame rate camera and blurry images are simulated through an averaging process. While it is possible to train a neural network to recover the sharp frames from their average, there is no guarantee of temporal smoothness for the resulting video, as the frames are estimated independently. To address the temporal smoothness requirement, we propose a system with two networks: one, DeblurNet, predicts sharp keyframes, and the second, InterpNet, predicts intermediate frames between the generated keyframes. A smooth transition is ensured by interpolating between consecutive keyframes using InterpNet. Moreover, the proposed scheme enables a further increase in frame rate without retraining the network, by applying InterpNet recursively between pairs of sharp frames. We evaluate the proposed method on several datasets, including a novel dataset captured with a Sony RX V camera. We also demonstrate its ability to increase the frame rate up to 20 times on real blurry videos.
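    The recursive frame-rate increase can be sketched as follows, with `interp_net` standing in for a trained InterpNet-style midpoint predictor (each pass roughly doubles the frame rate):

    ```python
    def upsample_frame_rate(frames, interp_net, passes=2):
        """Recursively insert interpolated frames between consecutive pairs.

        Each pass inserts one predicted frame between every pair of
        neighbors, so `passes` applications give roughly a 2**passes
        frame-rate increase without any retraining.
        """
        for _ in range(passes):
            out = []
            for a, b in zip(frames[:-1], frames[1:]):
                out.append(a)
                out.append(interp_net(a, b))  # predicted midpoint frame
            out.append(frames[-1])
            frames = out
        return frames
    ```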

    Propagation Based Selective Sampling for Alpha Matting

    No full text
    Image matting refers to the problem of foreground extraction and transparency determination in an image. It is widely used in various industries and has been studied for many years. Although various matting algorithms have been proposed, most of them are not robust enough to obtain satisfactory matting results in different regions of an image, such as smooth regions, regions with non-uniform color distributions, and isolated color regions. The main motivation of this thesis is to develop a new matting algorithm that can extract high-quality mattes from different regions of an image. Our proposed algorithm combines color sampling and propagation methods. Unlike previous propagation-based approaches, we propose an adaptive local and nonlocal propagation-based approach guided by the detection of different regions in the image. Our color sampling strategy, which is based on the characteristics of the superpixel, requires much less computational cost than previous methods. Experimental results show that the proposed algorithm can effectively handle different regions in the image and obtain better matting results than previous matting algorithms.
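    For context, alpha estimation from a sampled foreground/background color pair under the standard compositing model I = aF + (1 - a)B can be sketched as follows (a generic illustration, not this thesis's exact estimator):

    ```python
    import numpy as np

    def estimate_alpha(pixel, fg_sample, bg_sample, eps=1e-8):
        """Estimate alpha from one foreground/background color sample pair.

        Under the compositing model I = a*F + (1 - a)*B, the best alpha for
        a candidate (F, B) pair is the projection of I - B onto F - B.
        """
        d = fg_sample - bg_sample
        a = np.dot(pixel - bg_sample, d) / (np.dot(d, d) + eps)
        return float(np.clip(a, 0.0, 1.0))
    ```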

    KNN-Based Color Line Model for Image Matting 

    No full text

    Adaptive Propagation-Based Color-Sampling for Alpha Matting

    No full text
    Image matting refers to the problem of foreground extraction from an image and transparency determination of the pixels. Although various matting algorithms have been proposed, most are not sufficiently robust to obtain satisfactory matting results in different regions of an image, such as smooth regions, nonuniform color distribution regions and isolated color regions. This paper proposes a novel matting algorithm that can extract high-quality mattes from different regions of an image. Our proposed algorithm combines propagation and color-sampling methods. Unlike previous propagation-based approaches that use either local or nonlocal propagation methods, our propagation framework adaptively uses both local and nonlocal processes according to the detection results of the different regions in the image. Our color-sampling strategy, which is based on the characteristics of the superpixel, uses a simple sample selection criterion and requires significantly less computational cost than previous color-sampling methods. Experimental results show that our adaptive propagation framework, alone, outperforms the state-of-the-art propagation-based approaches. Combined with our color-sampling method, it can effectively handle different regions in the image and produce both visually and quantitatively high-quality matting results.
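    A simple sample-selection criterion of the kind described could score each candidate pair by its compositing reconstruction error; the brute-force loop and error metric below are assumptions for illustration, not the paper's criterion:

    ```python
    import numpy as np

    def select_best_pair(pixel, fg_samples, bg_samples):
        """Pick the (F, B) sample pair that best reconstructs the pixel.

        For each candidate pair, fit the best alpha under the compositing
        model I = a*F + (1 - a)*B, then keep the pair whose reconstruction
        is closest to the observed color.
        """
        best, best_err = None, np.inf
        for f in fg_samples:
            for b in bg_samples:
                d = f - b
                a = np.clip(np.dot(pixel - b, d) / (np.dot(d, d) + 1e-8), 0, 1)
                err = np.linalg.norm(pixel - (a * f + (1 - a) * b))
                if err < best_err:
                    best, best_err = (f, b, a), err
        return best
    ```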

    Bilayer Blind Deconvolution with the Light Field Camera

    No full text
    In this paper we propose a solution to blind deconvolution of a scene with two layers (foreground/background). We show that the reconstruction of the support of these two layers from a single image of a conventional camera is not possible. As a solution, we propose to use a light field camera. We demonstrate that a single light field image captured with a Lytro camera can be successfully deblurred. More specifically, we consider the case of space-varying motion blur, where the blur magnitude depends on the depth changes in the scene. Our method employs a layered model that handles occlusions and partial transparencies due to both the motion blur and the out-of-focus blur of the plenoptic camera. We reconstruct each layer's support, the corresponding sharp textures, and the motion blurs via an optimization scheme. The performance of our algorithm is demonstrated on synthetic as well as real light field images.
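    The layered, depth-dependent formation model can be sketched as follows (horizontal box blurs and the two-layer composite are illustrative simplifications of the paper's model; inputs are 2D arrays):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter1d

    def render_bilayer_blur(fg, fg_alpha, bg, fg_blur=9, bg_blur=3):
        """Layered image formation with depth-dependent motion blur.

        Each layer gets a horizontal motion blur whose extent depends on its
        depth (nearer layers move more in the image), and blurring the layer
        support produces the partial transparencies at occlusion boundaries.
        """
        af = uniform_filter1d(fg_alpha * fg, size=fg_blur, axis=1)  # blurred alpha-premultiplied fg
        a = uniform_filter1d(fg_alpha, size=fg_blur, axis=1)        # blurred layer support
        b = uniform_filter1d(bg, size=bg_blur, axis=1)
        return af + (1.0 - a) * b
    ```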